Lecture - Ken Norman, Princeton, Neural networks

Greg Detre

Monday, April 14, 2003

 

Simple correlational semantics model

localist input of the sentence, all words presented together without word order
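
a minimal sketch of what that localist, order-free input amounts to (made-up mini-vocabulary, not the actual model):

```python
import numpy as np

# made-up mini-vocabulary; localist coding = one dedicated unit per word
vocab = ["teacher", "bus-driver", "kissed", "ate", "soup"]
word_to_unit = {w: i for i, w in enumerate(vocab)}

def localist_sentence(words):
    """Sum of one-hot vectors: all words present at once, word order discarded."""
    v = np.zeros(len(vocab))
    for w in words:
        v[word_to_unit[w]] = 1.0
    return v

# "teacher kissed bus-driver" and "bus-driver kissed teacher" produce the
# identical input pattern -- exactly the loss of word order noted above
print(localist_sentence(["teacher", "kissed", "bus-driver"]))
print(localist_sentence(["bus-driver", "kissed", "teacher"]))
```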

 

can you have multiple representations of the same word (i.e. polysemy: different meanings)???

 

when it's being fed the sentence (and during training it's told which word is playing which role), it's not really learning syntax, because it's being taught that... how would you go about building a net to learn syntax???

you'd need multi-modal/experiential information... you'd need a situation for the lexical concepts to latch onto

 

none of these are scalable, because they all use localist representations

how difficult would it be to turn them into distributed representations???
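
a rough sketch of the contrast: distributed coding gives each word a pattern over shared feature units instead of a dedicated unit (random stand-in patterns here; a real net would learn them):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["teacher", "bus-driver", "kissed", "ate", "soup"]
n_features = 3   # shared feature units; stays small as the vocabulary grows

# distributed coding: each word is a pattern over shared feature units
# rather than one unit per word -- this is what makes it scalable
embedding = rng.normal(size=(len(vocab), n_features))

def distributed(word):
    return embedding[vocab.index(word)]

# overlap between patterns is what lets similar words generalise to each
# other, which localist units can never do
print(distributed("teacher"))
```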

 

80s band correlator

80s top 10 lists, correlates vaguely-related groups/singers together

designed to maybe recommend similar music (or, really, music that you might also like)
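
roughly the kind of computation involved, sketched with invented bands and lists (Pearson correlation over appearance patterns, a stand-in for whatever the actual net computed):

```python
import numpy as np

# made-up data: rows are bands, columns are top-10 lists; 1 means the band
# appeared on that list (real input would be the scraped 80s charts)
bands = ["Duran Duran", "A-ha", "Metallica", "Iron Maiden"]
appearances = np.array([
    [1, 1, 0, 1, 0],   # Duran Duran
    [1, 1, 0, 0, 1],   # A-ha
    [0, 0, 1, 1, 0],   # Metallica
    [0, 0, 1, 1, 1],   # Iron Maiden
], dtype=float)

# correlation between bands' appearance patterns: bands that show up on the
# same lists correlate, giving the "vaguely related" groupings
corr = np.corrcoef(appearances)

def recommend(band, k=1):
    i = bands.index(band)
    order = np.argsort(corr[i])[::-1]          # most-correlated first
    return [bands[j] for j in order if j != i][:k]

print(recommend("Duran Duran"))   # -> ['A-ha']
```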

 

Word-by-word sentence model (sentence gestalt)

what happens if you feed it 'The Kool-Aid stirred the schoolgirl with a bus-driver'???

if something has never before played the role of (e.g.) agent, then it will never be assigned that role

so it might do ok at easy tasks, but it's not doing anything hard/interesting...

it can manage to generalise to filler roles, e.g. 'the person ate something'

it also manages passives, since 20% of the sentences in the training set were passive

so it can get 'the teacher was kissed by the bus-driver' right, for instance

 

this reminds me of Avi Pfeffer: neural nets are the second best way to do most things

 

could this be used for subordinate clauses???

only if you had more than one context area

and ideally you want the context areas to be able to store non-lexical information, e.g. current visual scene, or emotional state

 

once the words get copied into the context area, the word-order information is basically lost, right???

 

what's interesting about this is that you feed it a sentence word by word, and after each word you ask it questions about the roles played so far by the previous words; so by the end of (say) a 5-word sentence you can ask it who the agent is (information which might have been presented at the very beginning of the sentence), and it's been able to hold onto all of it simultaneously
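
a very rough sketch of those mechanics, with untrained random weights (vocabulary, roles and layer sizes made up; not the actual sentence-gestalt architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = ["the", "teacher", "kissed", "bus-driver", "was", "by"]
roles = ["agent", "action", "patient"]
n_hid = 16

# untrained random weights, purely to show the information flow; the real
# model learns these from role-labelled training sentences
W_in  = rng.normal(scale=0.1, size=(len(vocab), n_hid))               # word -> gestalt
W_rec = rng.normal(scale=0.1, size=(n_hid, n_hid))                    # gestalt -> gestalt
W_dec = rng.normal(scale=0.1, size=(n_hid + len(roles), len(vocab)))  # (gestalt, query) -> word

def one_hot(n, i):
    v = np.zeros(n)
    v[i] = 1.0
    return v

gestalt = np.zeros(n_hid)
for word in ["the", "teacher", "kissed", "the", "bus-driver"]:
    # fold each word into the running gestalt; by sentence end the net has
    # to hold agent, action and patient information all at once
    gestalt = np.tanh(one_hot(len(vocab), vocab.index(word)) @ W_in + gestalt @ W_rec)

# probe the finished gestalt: "who is the agent?" -- asked at the end,
# about a word presented at the very beginning of the sentence
query = one_hot(len(roles), roles.index("agent"))
scores = np.concatenate([gestalt, query]) @ W_dec
print(vocab[int(np.argmax(scores))])   # arbitrary here, since nothing is trained
```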

 

how would you build a biologically-plausible net that can copy itself into different sections to build up a multi-word context like this???

 

'how many animals did Moses bring on the ark?' :)

 

can't handle the recursive structure of language (e.g. 'the cow that the rat bit grazed')

 

how, in fact, would you represent agent vs recipient in a visual scene???

 

you do feed it function words like 'in' and 'was', you just don't test it on them; so it represents them in the context layer and relies heavily on them as clues

 

To do

ask Sean(???) for the PDF detailing the hierarchical letter-task